As the complexity of modern software continues to escalate, software engineering has become an increasingly daunting and error-prone endeavor. In recent years, the field of Neural Code Intelligence (NCI) has emerged as a promising solution, leveraging the power of deep learning techniques to tackle analytical tasks on source code with the goal of improving programming efficiency and minimizing human errors within the software industry. Pretrained language models have become a dominant force in NCI research, consistently delivering state-of-the-art results across a wide range of tasks, including code summarization, generation, and translation. In this paper, we present a comprehensive survey of the NCI domain, including a thorough review of pretraining techniques, tasks, datasets, and model architectures. We hope this paper will serve as a bridge between the natural language and programming language communities, offering insights for future research in this rapidly evolving field.
The human brain lies at the core of a complex neurobiological system in which neurons, circuits, and subsystems interact in mysterious ways. Understanding the structural and functional mechanisms of the brain has long been a fascinating pursuit for both neuroscience research and clinical therapy of brain disorders. Mapping the human brain as a connectivity network is one of the most pervasive paradigms in neuroscience. Graph Neural Networks (GNNs) have recently emerged as a potential approach for modeling complex networked data. On the other hand, the low interpretability of deep models prevents their use in decision-critical settings such as healthcare. To bridge this gap, we propose an interpretable framework that analyzes disorder-specific regions of interest (ROIs) and salient connections. The proposed framework consists of two modules: a brain-network-oriented backbone model for disease prediction and a globally shared explanation generator that highlights disorder-specific biomarkers, including salient ROIs and important connections. We conduct experiments on three real-world brain disorder datasets. The results demonstrate that our framework achieves outstanding performance and identifies meaningful biomarkers. All code for this work is available at https://github.com/hennyjie/ibgnn.git.
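A globally shared explanation generator of the kind described above can be pictured as a single learnable ROI-by-ROI mask. The sketch below is an illustrative simplification rather than the released IBGNN code: the same mask modulates every subject's connectivity matrix before it enters the backbone, and its largest entries are read out as candidate salient connections.

```python
import torch
import torch.nn as nn

class SharedEdgeMaskExplainer(nn.Module):
    """Illustrative globally shared explanation mask (not the released IBGNN code):
    one learnable ROI-by-ROI mask modulates every subject's connectivity matrix,
    and its largest entries are read out as candidate salient connections."""

    def __init__(self, num_rois):
        super().__init__()
        self.logit_mask = nn.Parameter(torch.zeros(num_rois, num_rois))

    def forward(self, adj):
        # adj: (batch, num_rois, num_rois) brain connectivity matrices.
        mask = torch.sigmoid(self.logit_mask)    # shared across all subjects
        return adj * mask                        # masked graphs go to the backbone GNN

    def top_connections(self, k=10):
        # Read out the k strongest mask entries as (ROI_i, ROI_j) index pairs.
        flat = torch.sigmoid(self.logit_mask).flatten()
        idx = torch.topk(flat, k).indices
        n = self.logit_mask.shape[0]
        return torch.stack([idx // n, idx % n], dim=1)
```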
Brain networks characterize the complex connectivity among brain regions as graph structures, providing a powerful means for studying the brain connectome. In recent years, graph neural networks have become a prevalent paradigm for learning with structured data. However, due to the relatively high cost of data acquisition, most brain network datasets have limited sample sizes, which hinders sufficient training of deep learning models. Inspired by meta-learning, which learns new concepts quickly from limited training examples, this paper studies data-efficient training strategies for analyzing brain connectomes across datasets. Specifically, we propose to meta-train the model on datasets with large sample sizes and transfer the learned knowledge to small datasets. In addition, we explore two brain-network-oriented designs, namely atlas transformation and adaptive task reweighting. Compared with other pretraining strategies, our meta-learning-based approach achieves higher and more stable performance, demonstrating the effectiveness of the proposed solution. The framework is also able to derive new insights into the similarities among datasets and diseases in a data-driven fashion.
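The meta-train-then-transfer strategy can be sketched with a simple Reptile-style loop. This is an illustrative stand-in, not the paper's exact algorithm; its atlas transformation and task-reweighting modules are not reproduced, and all function and parameter names here are hypothetical.

```python
import copy
import torch

def reptile_meta_train(model, task_loaders, loss_fn,
                       inner_steps=5, inner_lr=1e-3, meta_lr=0.1, rounds=100):
    """Reptile-style meta-training sketch (illustrative stand-in, not the paper's
    exact procedure): adapt a copy of the model on one sampled source task, then
    move the shared initialization toward the adapted weights. The returned
    initialization is meant to be fine-tuned on the small target dataset."""
    for _ in range(rounds):
        loader = task_loaders[torch.randint(len(task_loaders), (1,)).item()]
        fast = copy.deepcopy(model)                      # task-specific fast weights
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _, (x, y) in zip(range(inner_steps), loader):
            opt.zero_grad()
            loss_fn(fast(x), y).backward()
            opt.step()
        with torch.no_grad():                            # meta-update of shared weights
            for p, q in zip(model.parameters(), fast.parameters()):
                p.add_(meta_lr * (q - p))
    return model
```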
Mapping the connectome of the human brain using structural or functional connectivity has become one of the most pervasive paradigms for neuroimaging analysis. Recently, Graph Neural Networks (GNNs) motivated by geometric deep learning have attracted broad interest due to their established power for modeling complex networked data. Despite their superior performance in many fields, there has not yet been a systematic study of how to design effective GNNs for brain network analysis. To bridge this gap, we present BrainGB, a benchmark for brain network analysis with GNNs. BrainGB standardizes the process by (1) summarizing brain network construction pipelines for both functional and structural neuroimaging modalities and (2) modularizing the implementation of GNN designs. We conduct extensive experiments on datasets across cohorts and modalities and recommend a set of general recipes for effective GNN designs on brain networks. To support open and reproducible research on GNN-based brain network analysis, we host the BrainGB website at https://braingb.us with models, tutorials, examples, as well as an out-of-the-box Python package. We hope that this work will provide useful empirical evidence and offer insights for future research in this novel and promising direction.
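To give a flavor of the construction step in (1), the sketch below builds a functional brain network from ROI time series by thresholding the correlation matrix. It is a generic illustration in plain NumPy, not the BrainGB package API; the function name and the threshold value are assumptions.

```python
import numpy as np

def build_functional_brain_network(timeseries, threshold=0.3):
    """Illustrative functional brain-network construction (not the BrainGB API).

    timeseries: (num_rois, num_timepoints) array of BOLD signals per region.
    Returns the thresholded connectivity matrix and an (E, 2) edge list
    that a GNN implementation can consume.
    """
    conn = np.corrcoef(timeseries)              # ROI-by-ROI Pearson correlation
    np.fill_diagonal(conn, 0.0)                 # drop self-loops
    adj = np.where(np.abs(conn) >= threshold, conn, 0.0)
    edges = np.argwhere(adj != 0)               # (source, target) node-index pairs
    return adj, edges
```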
Graphs are ubiquitous for encoding relational information about real-world objects in many domains. Graph generation, whose goal is to generate new graphs from a distribution similar to that of observed graphs, has received increasing attention thanks to recent advances in deep learning models. In this paper, we provide a comprehensive review of the existing literature on deep graph generation, covering a variety of emerging methods and their broad application areas. Specifically, we first formulate the problem of deep graph generation and discuss how it differs from several related graph learning tasks. Second, we divide state-of-the-art methods into three categories according to their model architectures and summarize their generation strategies. Third, we introduce three key application areas of deep graph generation. Finally, we highlight challenges and opportunities for future research on deep graph generation.
Multimedia recommendation, which aims to predict whether a user will interact with an item that carries multimodal content, has attracted growing interest in recent years. Previous studies focus on modeling user-item interactions with multimodal features included as side information. However, this scheme is not well suited to multimedia recommendation. First, collaborative item-item relationships are only implicitly modeled through high-order item-user-item co-occurrences. We argue that the latent semantic item-item structures underlying the multimodal contents can help learn better item representations and assist the recommender model in comprehensively discovering candidate items. Second, previous studies overlook fine-grained multimodal fusion. Although access to multiple modalities allows us to capture rich information, we argue that the simple coarse-grained fusion by linear combination or concatenation in prior work is insufficient to fully understand content information and item relationships. To this end, we propose a latent structure mining method with contrastive modality fusion (MICRO for short). Specifically, we devise a novel modality-aware structure learning module that learns item-item relationships for each modality. Based on the learned modality-aware latent item relationships, we perform graph convolutions that explicitly inject item affinities into modality-aware item representations. We then design a novel contrastive method to fuse the multimodal features. These enriched item representations can be plugged into existing collaborative filtering methods to produce more accurate recommendations. Extensive experiments on real-world datasets demonstrate the superiority of our method over state-of-the-art baselines.
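To illustrate the modality-aware structure learning idea, the sketch below builds, for one modality, a k-nearest-neighbor item-item graph from that modality's features; graph convolutions over such graphs can then inject item affinities into the modality-aware item representations. This is a generic stand-in, not the paper's exact module, and the function name and k value are assumptions.

```python
import numpy as np

def modality_knn_graph(features, k=10):
    """Illustrative modality-aware item-item structure learning (generic stand-in).

    features: (num_items, dim) features from one modality (e.g., visual or textual).
    Returns a row-normalized (num_items, num_items) adjacency matrix connecting
    each item to its k most similar items by cosine similarity.
    """
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)                   # exclude self-edges
    adj = np.zeros_like(sim)
    topk = np.argpartition(-sim, k, axis=1)[:, :k]   # indices of k nearest items per row
    rows = np.repeat(np.arange(sim.shape[0]), k)
    adj[rows, topk.ravel()] = 1.0
    return adj / adj.sum(axis=1, keepdims=True)      # row-normalize for propagation
```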
Multiplex network embedding aims to project the nodes of a network into low-dimensional vectors while preserving their multiple relations and attribute information. Contrastive learning approaches have shown promising performance on this task. However, they neglect the semantic consistency between the fused and view-specific representations and have difficulty modeling the complementary information among different views. To address these deficiencies, this work presents a novel contrastive learning framework for multiplex network embedding (CREME). In our work, different views are obtained from the various relations among nodes. We then generate view embeddings via suitable view encoders and fuse these representations with an attentive multi-view aggregator. In particular, we design two collaborative contrastive objectives, one over the fused representation and one over the view representations, to train the model in a self-supervised manner. The former distills information from the embeddings generated from different views, while the latter captures the complementary information among views to promote distinctive view embeddings. We also show that the two objectives can be unified into a single objective for model training. Extensive experiments on three real-world datasets demonstrate that our proposed CREME consistently outperforms state-of-the-art methods.
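The attentive multi-view aggregator can be pictured as follows. The layer below is a hypothetical illustration, not the authors' released code: each view embedding is scored by a small shared attention network, and the views are combined by their softmax weights.

```python
import torch
import torch.nn as nn

class AttentiveViewFusion(nn.Module):
    """Hypothetical attentive multi-view aggregator (illustration only): score each
    view embedding with a shared attention network and fuse views by softmax weights."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, view_embs):
        # view_embs: (num_views, num_nodes, dim) per-view node embeddings.
        w = torch.softmax(self.score(view_embs).mean(dim=1), dim=0)   # (num_views, 1)
        return (w.unsqueeze(1) * view_embs).sum(dim=0)                # fused (num_nodes, dim)
```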
Recently, contrastive learning (CL) has emerged as a successful method for unsupervised graph representation learning. Most graph CL methods first perform stochastic augmentation on the input graph to obtain two graph views and then maximize the agreement of representations in the two views. Despite the prosperous development of graph CL methods, the design of graph augmentation schemes, a crucial component of CL, remains rarely explored. We argue that data augmentation schemes should preserve the intrinsic structures and attributes of graphs, which will force the model to learn representations that are insensitive to perturbations of unimportant nodes and edges. However, most existing methods adopt uniform data augmentation schemes, such as uniformly dropping edges and uniformly shuffling features, leading to suboptimal performance. In this paper, we propose a novel graph contrastive representation learning method with adaptive augmentation that incorporates various priors on the topological and semantic aspects of the graph. Specifically, on the topology level, we design augmentation schemes based on node centrality measures to highlight important connective structures. On the node attribute level, we corrupt node features by adding more noise to unimportant node features, forcing the model to recognize the underlying semantic information. We perform extensive node classification experiments on a variety of real-world datasets. Experimental results demonstrate that our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts, which validates the effectiveness of the proposed contrastive framework with adaptive augmentation.
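As a concrete illustration of the adaptive scheme described above, the sketch below drops edges with probability inversely related to endpoint centrality and injects stronger noise into less important feature dimensions. It is a minimal NumPy sketch under simplifying assumptions: degree centrality stands in for the paper's more general centrality measures, and the function names and default probabilities are hypothetical.

```python
import numpy as np

def adaptive_edge_drop(edges, num_nodes, p_e=0.3, p_tau=0.7, seed=0):
    """Drop each edge with a probability inversely related to its endpoints'
    degree centrality, so important connective structure tends to survive.
    edges: (E, 2) integer array of node-index pairs. (Illustrative sketch.)"""
    rng = np.random.default_rng(seed)
    degree = np.bincount(edges.ravel(), minlength=num_nodes).astype(float)
    s = np.log1p(degree[edges]).mean(axis=1)          # per-edge centrality score
    # Low-centrality edges get probabilities up to p_tau; high-centrality edges near 0.
    drop_prob = np.minimum((s.max() - s) / (s.max() - s.mean() + 1e-9) * p_e, p_tau)
    return edges[rng.random(len(edges)) > drop_prob]

def adaptive_feature_noise(x, scale=0.1, seed=0):
    """Corrupt node features, injecting more noise into dimensions whose average
    magnitude (a crude importance proxy) is small. (Illustrative sketch.)"""
    rng = np.random.default_rng(seed)
    importance = np.abs(x).mean(axis=0)
    weight = 1.0 - importance / (importance.max() + 1e-9)  # unimportant dims -> ~1
    return x + rng.normal(scale=scale, size=x.shape) * weight
```

The two augmented views produced this way would then be fed to a shared encoder and trained with a standard contrastive objective.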
Benefiting from its ability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate augmentations can easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls samples from the same cluster close while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
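To make the sampling strategy concrete, here is a minimal PyTorch sketch (illustrative only, not the released CCGC code) of a loss that treats cross-view pairs of high-confidence nodes from the same cluster as positives and the other clusters' centers as negatives, maximizing and minimizing cosine similarity respectively. The function name and the temperature parameter are assumptions.

```python
import torch
import torch.nn.functional as F

def cluster_guided_loss(z1, z2, labels, confident, tau=1.0):
    """Illustrative cluster-guided contrastive loss (not the released CCGC code).
    z1, z2: (N, d) node embeddings from the two view encoders.
    labels: (N,) pseudo-labels from the current clustering.
    confident: (N,) boolean mask marking high-confidence nodes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    ids = torch.unique(labels[confident])
    # Centers of the high-confidence clusters (view 2) serve as negative samples.
    centers = F.normalize(
        torch.stack([z2[confident & (labels == c)].mean(0) for c in ids]), dim=1)
    loss = z1.new_zeros(())
    for i, c in enumerate(ids):
        idx = confident & (labels == c)
        pos = (z1[idx] * z2[idx]).sum(dim=1) / tau   # cross-view cosine sim of positives
        loss = loss - pos.mean()                     # maximize positive similarity
        keep = torch.ones(len(ids), dtype=torch.bool, device=z1.device)
        keep[i] = False
        if keep.any():                               # needs at least two clusters
            neg = z1[idx] @ centers[keep].T / tau    # similarity to other clusters' centers
            loss = loss + neg.mean()                 # minimize negative similarity
    return loss / len(ids)
```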
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which turns supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-traced samples.
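To give a feel for what temporal accumulation does, here is a minimal non-learned sketch (illustrative only; the paper uses a learned temporal accumulation network): reproject the previously accumulated frame along per-pixel motion vectors and blend it with the current frame. The function name and the blend factor are assumptions.

```python
import numpy as np

def temporal_accumulate(current, prev_accum, motion, alpha=0.1):
    """Illustrative non-learned temporal accumulation: blend the current frame with
    the previous accumulated frame after reprojecting it along motion vectors.

    current, prev_accum: (H, W, C) images; motion: (H, W, 2) pixel offsets.
    A small alpha favors history, improving temporal stability."""
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    py = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    px = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    reprojected = prev_accum[py, px]             # nearest-neighbor reprojection
    return alpha * current + (1.0 - alpha) * reprojected
```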